Performance Envelopes of Virtual Keyboard Text Input Strategies in Virtual Reality
Virtual and Augmented Reality deliver engaging interaction experiences that can transport and extend the capabilities of the user. To ensure these paradigms are more broadly usable and effective, however, it is necessary to also deliver many of the conventional functions of a smartphone or personal computer. It remains unclear how conventional input tasks, such as text entry, can best be translated into virtual and augmented reality. In this paper we examine the performance potential of four alternative text entry strategies in virtual reality (VR). These four strategies are selected to provide full coverage of two fundamental design dimensions: i) physical surface association; and ii) number of engaged fingers. Specifically, we examine typing with index fingers on a surface and in mid-air and typing using all ten fingers on a surface and in mid-air. The central objective is to evaluate the human performance potential of these four typing strategies without being constrained by current tracking and statistical text decoding limitations. To this end we introduce an auto-correction simulator that uses knowledge of the stimulus to emulate statistical text decoding within constrained experimental parameters and use high-precision motion tracking hardware to visualise and detect fingertip interactions. We find that alignment of the virtual keyboard with a physical surface delivers significantly faster entry rates over a mid-air keyboard. Also, users overwhelmingly fail to effectively engage all ten fingers in mid-air typing, resulting in slower entry rates and higher error rates compared to just using two index fingers. In addition to identifying the envelopes of human performance for the four strategies investigated, we also provide a detailed analysis of the underlying features that distinguish each strategy in terms of its performance and behaviour. This work was supported by Facebook Reality Labs and by EPSRC (grant EP/R004471/1).
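The idea of an auto-correction simulator that exploits knowledge of the stimulus can be illustrated with a minimal sketch. This is not the paper's actual simulator; the keyboard layout, tolerance radius, and decoding rule below are hypothetical. An "oracle" corrector accepts each noisy tap as the intended character when it lands within a tolerance radius of that key's centre, and otherwise falls back to the nearest key:

```python
# Illustrative oracle auto-correction sketch (hypothetical layout and
# tolerance; not the simulator described in the paper).
import math

# Hypothetical key centre positions on a unit grid.
KEYS = {
    "c": (2.0, 2.0), "a": (0.0, 1.0), "t": (4.0, 0.0),
}

def oracle_decode(stimulus, taps, tolerance=0.5):
    """Decode noisy tap positions given knowledge of the stimulus word.

    A tap is corrected to the intended character when it falls within
    `tolerance` of that key's centre; otherwise it is scored as the
    nearest key overall (an uncorrectable error).
    """
    decoded = []
    for intended, (x, y) in zip(stimulus, taps):
        kx, ky = KEYS[intended]
        if math.hypot(x - kx, y - ky) <= tolerance:
            decoded.append(intended)      # corrected using the stimulus
        else:
            nearest = min(KEYS, key=lambda k: math.hypot(x - KEYS[k][0],
                                                         y - KEYS[k][1]))
            decoded.append(nearest)       # too far off: raw nearest key
    return "".join(decoded)

# A noisy attempt at "cat": the first two taps fall within tolerance,
# while the third lands far from "t" and closest to "a".
print(oracle_decode("cat", [(2.1, 2.2), (0.3, 0.9), (0.2, 1.1)]))  # -> caa
```

Decoupling decoding quality from tracking quality in this way lets an experiment probe the human performance ceiling of a typing strategy rather than the limitations of a particular decoder.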
Understanding the effects of code presentation
The majority of software is still written using text-based programming languages. With today’s large, high-resolution color displays, developers have devised their own “folk design” methodologies to exploit these advances. As software becomes more and more critical to everyday life, supporting developers in rapidly producing and revising code accurately should be a priority. We consider how layout, typefaces, anti-aliasing, syntax highlighting, and semantic highlighting might impact developer efficiency and accuracy. This is the author accepted manuscript. The final version is available from the Association for Computing Machinery via http://dx.doi.org/10.1145/2846680.284668
Understanding Adoption Barriers to Dwell-Free Eye-Typing: Design Implications from a Qualitative Deployment Study and Computational Simulations
Eye-typing is a slow and cumbersome text entry method typically used by individuals with no other practical means of communication. As an alternative, prior HCI research has proposed dwell-free eye-typing as a potential improvement that eliminates time-consuming and distracting dwell-timeouts. However, it is rare that such research ideas are translated into working products. This paper reports on a qualitative deployment study of a product that was developed to allow users access to a dwell-free eye-typing research solution. This allowed us to understand how such a research solution would work in practice, as part of users' current communication solutions in their own homes. Based on interviews and observations, we discuss a number of design issues that currently act as barriers preventing widespread adoption of dwell-free eye-typing. The study findings are complemented with computational simulations in a range of conditions that were inspired by the findings in the deployment study. These simulations serve to both contextualize the qualitative findings and to explore quantitative implications of possible interface redesigns. The combined analysis gives rise to a set of design implications for enabling wider adoption of dwell-free eye-typing in practice.
Performance Envelopes of In-Air Direct and Smartwatch Indirect Control for Head-Mounted Augmented Reality
The scarcity of established input methods for augmented reality (AR) head-mounted displays (HMD) motivates us to investigate the performance envelopes of two easily realisable solutions: indirect cursor control via a smartwatch and direct control by in-air touch. Indirect cursor control via a smartwatch has not been previously investigated for AR HMDs. We evaluate these two techniques in three fundamental user interface actions: target acquisition, goal crossing, and circular steering. We find that in-air is faster than smartwatch (p<0.001) for target acquisition and circular steering. We observe, however, that in-air selection can lead to discomfort after extended use and suggest that smartwatch control offers a complementary alternative. This work was supported by EPSRC (grant number EP/N010558/1) and the Trimble Fund. Part of this work was conducted within the Transregional Collaborative Research Centre SFB/TRR 62 Companion-Technology of Cognitive Technical Systems funded by the German Research Foundation (DFG).
A Review of User Interface Design for Interactive Machine Learning
Interactive Machine Learning (IML) seeks to complement human perception and intelligence by tightly integrating these strengths with the computational power and speed of computers. The interactive process is designed to involve input from the user but does not require the background knowledge or experience that might be necessary to work with more traditional machine learning techniques. Under the IML process, non-experts can apply their domain knowledge and insight over otherwise unwieldy datasets to find patterns of interest or develop complex data-driven applications. This process is co-adaptive in nature and relies on careful management of the interaction between human and machine. User interface design is fundamental to the success of this approach, yet there is a lack of consolidated principles on how such an interface should be implemented. This article presents a detailed review and characterisation of Interactive Machine Learning from an interactive systems perspective. We propose and describe a structural and behavioural model of a generalised IML system and identify solution principles for building effective interfaces for IML. Where possible, these emergent solution principles are contextualised by reference to the broader human-computer interaction literature. Finally, we identify strands of user interface research key to unlocking more efficient and productive non-expert interactive machine learning applications.
Ticker: An Adaptive Single-Switch Text Entry Method for Visually Impaired Users.
Ticker is a probabilistic stereophonic single-switch text entry method for visually-impaired users with motor disabilities who rely on single-switch scanning systems to communicate. Such scanning systems are sensitive to a variety of noise sources, which are inevitably introduced in practical use of single-switch systems. Ticker uses a novel interaction model based on stereophonic sound coupled with statistical models for robust inference of the user's intended text in the presence of noise. As a consequence of its design, Ticker is resilient to noise and therefore a practical solution for single-switch scanning systems. Ticker's performance is validated using a combination of simulations and empirical user studies. The work was funded by the Aegis EU project and the Gatsby Charitable Foundation.
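The general principle of statistical inference over noisy single-switch input can be sketched as follows. This is not Ticker's actual model; the scan rate, Gaussian noise assumption, and toy vocabulary are all illustrative. Letters are announced at fixed offsets within a scan cycle, a click's observed time is modelled as the target letter's offset plus timing noise, and the decoder picks the vocabulary word that maximises the likelihood of the observed clicks:

```python
# Illustrative noisy-channel decoding sketch for single-switch scanning
# (hypothetical parameters; not the statistical model used by Ticker).
import math

SCAN_STEP = 0.5   # seconds between successive letters in the scan cycle
NOISE_SD = 0.2    # assumed std. dev. of click-timing noise (seconds)
VOCAB = ["cat", "cab", "car"]   # toy vocabulary

def letter_offset(ch):
    """Time at which `ch` is announced within a scan cycle."""
    return (ord(ch) - ord("a")) * SCAN_STEP

def log_likelihood(word, click_times):
    """Gaussian log-likelihood of the observed click times under `word`."""
    return sum(
        -((t - letter_offset(ch)) ** 2) / (2 * NOISE_SD ** 2)
        for ch, t in zip(word, click_times)
    )

def decode(click_times):
    """Return the vocabulary word most consistent with the click times."""
    return max(VOCAB, key=lambda w: log_likelihood(w, click_times))

# Clicks near the offsets of "c" (1.0s), "a" (0.0s), and "b" (0.5s):
# even though each click is slightly mistimed, the decoder recovers the
# intended word by pooling evidence across all three clicks.
print(decode([1.05, 0.02, 0.55]))  # -> cab
```

Because evidence is pooled across the whole word rather than committing to each letter in isolation, a decoder of this shape degrades gracefully as timing noise grows, which is the property the abstract describes as resilience to noise.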